ASU researchers discuss the implications of deepfakes

In the era of AI, the most dangerous face might be the one in the mirror



A deepfake image of Katy Perry showing the singer supposedly walking the 2024 Met Gala red carpet was so convincing that even the celebrity’s mother was fooled. Subbarao Kambhampati, a professor of computer science and engineering in the School of Computing and Augmented Intelligence, part of ASU's Ira A. Fulton Schools of Engineering, has been working to raise awareness of this new form of artificial intelligence and inform the public about the potential for its abuse. Graphic generated by an unknown source using AI


Plato feared the artist.

The ancient Greek philosopher, the original source of the notion that art imitates life, found imagery, at best, an entertaining illusion — at worst, a dangerous deception.

In her book “Plato’s Fear,” Ajit Maan writes, “Neither reality, nor reason holds the power that artists do, because artists don’t just reproduce reality; artists provide a new way to view reality.”

Maan is a professor of practice in the School of Politics and Global Studies at Arizona State University. As an expert on defense and security strategy focused on narrative warfare, especially in large-scale conflicts, Maan provides a distinct lens through which to view the rising threat of deepfakes.

Thanks to technology powered by artificial intelligence, or AI, harnessing the power of art to imitate real people, real settings and real life is easier and cheaper than ever.

Deepfakes are a new kind of power, the latest form of what Maan describes as a representational force: a communication mechanism that can be wielded and weaponized.

Deepfakes and deep deception

Deepfakes are AI-fabricated images, videos or voice recordings. They have become so convincing that a finance worker at a multinational firm was recently tricked into paying $25 million to a cybercriminal posing as a company executive during a deepfaked live video call.

While the ability to create fake images and video has been around for a while, Subbarao Kambhampati, a professor of computer science and engineering in the School of Computing and Augmented Intelligence, part of the Ira A. Fulton Schools of Engineering at ASU, explains that advances in AI have made the technology cheap and fast, placing it within easy reach of scammers. Using machine learning tools that ingest images and recordings of an intended target and combine them with large datasets of human behavior, bad actors can quickly create a realistic mimic of a person’s likeness.
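For readers curious about the mechanics, the sketch below illustrates the shared-encoder, per-identity-decoder autoencoder design behind many early face-swap deepfakes. It is a minimal, illustrative example written in PyTorch, not the tooling of any particular scammer or research group; the network sizes, training loop and random stand-in data are assumptions chosen to keep it small and runnable, and real systems train far larger models on thousands of aligned face crops.

```python
# Minimal sketch (illustrative only) of the classic autoencoder face swap:
# one shared encoder learns pose/expression features, while a separate
# decoder is trained per identity. Swapping decoders at inference renders
# person A's motion with person B's face.
import torch
import torch.nn as nn

class Encoder(nn.Module):
    """Shared encoder: learns pose and expression features common to both faces."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.ReLU(),   # 64x64 -> 32x32
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),  # 32x32 -> 16x16
            nn.Flatten(),
            nn.Linear(64 * 16 * 16, 256),
        )

    def forward(self, x):
        return self.net(x)

class Decoder(nn.Module):
    """Per-identity decoder: learns to render one specific person's face."""
    def __init__(self):
        super().__init__()
        self.fc = nn.Linear(256, 64 * 16 * 16)
        self.net = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),   # 16 -> 32
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1), nn.Sigmoid(), # 32 -> 64
        )

    def forward(self, z):
        return self.net(self.fc(z).view(-1, 64, 16, 16))

encoder = Encoder()
decoder_a, decoder_b = Decoder(), Decoder()  # one decoder per person

# Random tensors stand in for aligned 64x64 face crops of persons A and B.
faces_a = torch.rand(8, 3, 64, 64)
faces_b = torch.rand(8, 3, 64, 64)

params = (list(encoder.parameters())
          + list(decoder_a.parameters())
          + list(decoder_b.parameters()))
optimizer = torch.optim.Adam(params, lr=1e-3)
loss_fn = nn.MSELoss()

# Train each decoder to reconstruct its own person through the shared encoder.
for step in range(100):  # real systems train much longer on much more data
    optimizer.zero_grad()
    loss = (loss_fn(decoder_a(encoder(faces_a)), faces_a)
            + loss_fn(decoder_b(encoder(faces_b)), faces_b))
    loss.backward()
    optimizer.step()

# The "swap": encode person A's pose/expression, render it with person B's face.
with torch.no_grad():
    fake_frames = decoder_b(encoder(faces_a))  # B's likeness, A's motion
```

The design choice that makes the swap work is the shared encoder: because both decoders learn to render a face from the same pose-and-expression representation, feeding person A’s encoding into person B’s decoder produces person B’s face making person A’s expressions.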

As a thought leader on the principled use and ethical development of AI and a fellow in the Association for the Advancement of Artificial Intelligence, Kambhampati has long been working to raise awareness of this technology. Media outlets such as the New York Times often ask for his perspective on news involving AI; he frequently speaks at conferences and events; and he has been asked to advise the Arizona Supreme Court on the intersection of AI and the law.

“The world is in a period of great change,” Kambhampati says.

Kambhampati notes that society has absorbed these kinds of changes in the past and says the need to secure the digital world mirrors the emergence of efforts to protect property in the real world.

“If I traveled to the 1940s in a time machine and tried to install a burglar alarm in a home, everyone in that era would have regarded it as ridiculous,” he explains. “At some point, people became comfortable with the idea that houses needed more protection and systems arose to meet those needs.”

More concerning than the use of deepfakes for financial scams is the threat this technology poses to elections around the world in 2024. In April, The Washington Post reported an uptick in the number of electoral deepfakes, noting that fake audio and video recordings had already been used in attempts to disrupt elections in Taiwan, South Africa and Moldova.

The U.S. Cybersecurity and Infrastructure Security Agency has issued guidance to election officials, saying, “For the 2024 election cycle, generative AI capabilities will likely not introduce new risks, but they may amplify existing risks to election infrastructure.”

Both reports speculate that state-sponsored actors are behind such deepfakes.

“Undermining public trust in government is high on the to-do list of those forces seeking to destabilize communities, nation-states, even global order,” Maan says.

When seeing is not believing

Experts like Kambhampati endeavor to inform the public about the risks to upcoming elections. The professor has made a variety of appearances to raise public awareness of deepfakes, speaking to local Arizona and national media outlets.

Thanks to efforts such as these, knowledge is increasing. In a survey conducted by the Pew Research Center last year, 42% of Americans showed they could correctly identify what deepfakes are.

This work is paying off — up to a point.

Kambhampati says that Katy Perry’s own mother was recently fooled by a fake image of the singer supposedly attending the 2024 Met Gala.

And experts worry about research suggesting that people might actually prefer the version of reality deepfakes provide. Technological tools designed to detect deepfakes are coming online, they say, but detection alone might not be enough to stop the spread of misinformation.

Though it was almost immediately identified and discredited, a deepfake video of then-House Speaker Nancy Pelosi was still shared on social media more than 2 million times. A recent Wall Street Journal article reported on the use of AI to generate fake nude images of teenagers; even when people were made aware the images were fake, the reporting suggested, they still viewed the victims negatively.

Michael Barlev, an assistant research professor in the ASU Department of Psychology, has been studying the spread of conspiracy theories and irrational beliefs.

“We can all be fooled by a deepfake if it’s high quality enough,” Barlev says. “But there’s a perhaps less obvious reason deepfakes spread, and it’s one I’ve been especially interested in. Individuals might sometimes be socially motivated to believe the deepfake.”

Barlev says people sometimes believe and spread deepfakes because the fakes align with their existing beliefs or internal goals. The tendency to favor information that confirms what one already believes is known as confirmation bias.

“Our minds are equipped with lots of psychological tricks — confirmation bias is one of them — which allow us to fulfill social motivations like signaling our group affiliation and commitment, rising in prestige, or derogating disliked individuals and groups,” Barlev says.

The future of the truth

Kambhampati believes we will see increasing use of technological solutions to identify deepfakes, and he plans to continue efforts to educate the public.

“The biggest solution is education,” he says. “We’re going to need to learn not to trust our eyes and ears.”

But confirmation bias might be the toughest obstacle to stopping the spread of misinformation.

“I think deepfakes will remain a serious problem,” Barlev says. “Lots of people are worried about how realistic deepfakes are getting, and I think that’s a real concern. But we should be equally concerned about how deepfakes — realistic or not — are used in socially motivated ways.”

Maan writes that disinformation, such as deepfakes, can do something powerful for its intended audience.

“Why does disinformation stick even when it has been proven false? The answer is because the disinformation is more meaningful to the audience than the truth,” she says.

Kambhampati agrees, saying, “In the end, if you want to choose to believe things that aren’t real, computer science can’t help you.”

Perhaps Plato was right to worry.
